Results 1 - 8 of 8
1.
J Natl Med Assoc ; 2023 May 26.
Article in English | MEDLINE | ID: covidwho-2327603

ABSTRACT

The COVID-19 pandemic has compelled rethinking and changes in medical education, perhaps the most controversial being the cancellation of the USMLE Step-2 Clinical Skills exam (Step-2 CS). What started in March 2020 as suspension of this professional licensure exam, because of concerns about infection risk for examinees, standardized patients (SPs), and administrators, became permanent cancellation in January 2021. As expected, it triggered debate in medical education circles. On the positive side, however, the USMLE regulatory agencies (NBME and FSMB) saw an opportunity to reinvent an exam tainted by perceptions of validity deficits, cost, examinee inconvenience, and worries about future pandemics; they therefore called for a public debate to fashion a way forward. We have approached the issue by defining Clinical Skills (CS) and exploring its epistemology and historical evolution, including assessment modalities from Hippocratic times to the modern era. We defined CS as the art of medicine manifest in the physician-patient encounter as history taking (driven by communication skills and cultural competence) and physical examination. We classified CS components into knowledge and psychomotor skill domains and established their relative importance in the physician's diagnostic process (clinical reasoning), thus providing a theoretical framework for developing valid, reliable, feasible, fair, and verifiable CS assessment. Given the concerns about COVID-19 and future pandemics, we established that CS can largely be assessed remotely, and that what cannot be assessed remotely can be assessed locally (at the school or regional-consortium level) as part of a USMLE-regulated and supervised assessment regimen with established national standards, thus maintaining USMLE's fiduciary responsibilities. We have suggested a national/regional program for faculty development in CS curriculum design and assessment, including standard-setting skills. This pool of expert faculty will form the nucleus of our proposed USMLE-regulated External Peer Review Initiative (EPRI). Finally, we suggest that CS evolve into an academic discipline and department of its own, rooted in scholarship.

2.
Journal of Institutional Studies ; 14(3):59-73, 2022.
Article in English | Web of Science | ID: covidwho-2311064

ABSTRACT

The search for new drivers of global socio-economic development in the face of a slowdown (partly due to COVID-19) has pushed skills development, including institutional solutions for human capital formation, to the forefront. Despite significant progress in understanding the importance of general skills and their contribution to economic growth, professional skills, the pivot of specific human capital, are commonly analyzed through vague categories such as formal qualification and years of tenure. The lack of an institutionalized and commonly accepted practice for measuring professional skills constrains economic research on returns to skills, as well as institutional studies. The present study aims to partially fill this gap by analyzing the discussion of professional skills within the broader discourse on skills, systematizing existing approaches to professional skills assessment, and mapping the prospects in the context of emerging digital technologies and the institutional change attributed to them. The analysis was conducted on academic papers and reports published in 2013-2020. The research showed that although the number of academic papers focusing on professional skills is high, the discussion (especially about assessment) is fragmented, reflecting the specifics of individual industries. The mainstream expert discussion on skills and their assessment tends to focus on general skills and overlook professional skills, with the partial exception of certain digital skills. The discussion is largely grounded in traditional approaches, which cannot produce scalable and comparable results for further economic analysis. At the same time, new digital assessment tools are not yet widely disseminated. There is a need for further improvement and expansion of traditional skills assessments via exams or tests, alongside the search for novel approaches to measuring skills acquired and used in the workplace. This could also contribute to institutional studies of transformations occurring at the corporate, national, and international levels.

3.
BMC Med Educ ; 23(1): 153, 2023 Mar 11.
Article in English | MEDLINE | ID: covidwho-2262864

ABSTRACT

BACKGROUND: Non-technical skills (NTS) assessment tools are widely used to provide formative and summative assessment for healthcare professionals, and many such tools now exist. This study examined three tools designed for similar settings and gathered evidence on their validity and usability. METHODS: Three NTS assessment tools designed for use in the UK were applied by three experienced faculty to review standardized videos of simulated cardiac arrest scenarios: ANTS (Anesthetists' Non-Technical Skills), Oxford NOTECHS (Oxford NOn-TECHnical Skills) and OSCAR (Observational Skill-based Clinical Assessment tool for Resuscitation). Internal consistency and interrater reliability were calculated, and quantitative and qualitative analyses of usability were performed for each tool. RESULTS: Internal consistency and interrater reliability (IRR) varied considerably for the three tools across NTS categories and elements. Intraclass correlation scores of three expert raters ranged from poor (task management in ANTS [0.26] and situation awareness (SA) in Oxford NOTECHS [0.34]) to very good (problem solving in Oxford NOTECHS [0.81] and cooperation [0.84] and SA [0.87] in OSCAR). Furthermore, different statistical tests of IRR produced different results for each tool. Quantitative and qualitative examination of usability also revealed challenges in using each tool. CONCLUSIONS: The lack of standardization of NTS assessment tools, and of training in their use, is unhelpful for healthcare educators and students. Educators require ongoing support in the use of NTS assessment tools for the evaluation of individual healthcare professionals or healthcare teams. Summative or high-stakes examinations using NTS assessment tools should be undertaken with at least two assessors to provide consensus scoring. In light of the renewed focus on simulation as an educational tool to support and enhance training recovery in the aftermath of COVID-19, it is even more important that the assessment of these vital skills is standardized, simplified and supported with adequate training.
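
The reliability statistics reported in this record (Cronbach's alpha for internal consistency and intraclass correlation for interrater reliability) can be reproduced from a simple scores matrix. The sketch below is a minimal Python illustration using invented scores for three raters, not the authors' analysis code; the ICC(2,1) form shown is one common choice, and the abstract does not state which ICC variant was used.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (subjects x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items/raters
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` is a (subjects x raters) matrix."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means, col_means = x.mean(axis=1), x.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))          # residual error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical scores: 6 simulated scenarios rated by 3 assessors (0-4 scale).
ratings = np.array([
    [3, 3, 2],
    [4, 4, 4],
    [2, 1, 2],
    [3, 2, 3],
    [1, 1, 0],
    [4, 3, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
print(f"ICC(2,1):         {icc_2_1(ratings):.2f}")
```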


Subject(s)
COVID-19 , Clinical Competence , Humans , Adult , Reproducibility of Results , Health Personnel , Educational Measurement
4.
Surg Endosc ; 2022 Aug 18.
Article in English | MEDLINE | ID: covidwho-2243633

ABSTRACT

BACKGROUND: Early introduction and distributed learning have been shown to improve student comfort with basic requisite suturing skills. The need for more frequent and directed feedback, however, remains an enduring concern for both remote and in-person training. A previous in-person curriculum for our second-year medical students transitioning to clerkships was adapted to an at-home video-based assessment model due to the social distancing implications of COVID-19. We aimed to develop an Artificial Intelligence (AI) model to perform video-based assessment. METHODS: Second-year medical students were asked to submit a video of a simple interrupted knot tied on a Penrose drain using instrument-tying technique after self-training to proficiency. Proficiency was defined as performing the task in under two minutes with no critical errors. All videos were first manually given a pass-fail rating and then underwent task segmentation. We developed and trained two AI models based on convolutional neural networks to identify errors (instrument holding and knot tying) and provide automated ratings. RESULTS: A total of 229 medical student videos were reviewed (150 pass, 79 fail). Among those that failed, the critical error distribution was 15 knot-tying, 47 instrument-holding, and 17 multiple errors. A total of 216 videos were used to train the models after excluding low-quality videos. K-fold cross-validation (k = 10) was used. The accuracy of the instrument-holding model was 89% with an F1 score of 74%. For the knot-tying model, the accuracy was 91% with an F1 score of 54%. CONCLUSIONS: Medical students require assessment and directed feedback to acquire surgical skills, but this is often time-consuming and inadequately done. AI techniques can instead be employed to perform automated surgical video analysis. Future work will optimize the current model to identify discrete errors in order to supplement video-based rating with specific feedback.
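
The evaluation protocol described above (10-fold cross-validation with accuracy and F1 reported per model) can be sketched independently of the network itself. The example below is a hypothetical illustration only: it uses random per-video feature vectors and a logistic-regression stand-in for the authors' convolutional models, since their data and architecture are not available; only the fold splitting and metric computation mirror the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)

# Hypothetical stand-ins: one feature vector per submitted video and a
# binary label (1 = critical error present, 0 = none).
n_videos, n_features = 216, 128
X = rng.normal(size=(n_videos, n_features))
y = rng.integers(0, 2, size=n_videos)

accs, f1s = [], []
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    clf = LogisticRegression(max_iter=1000)  # stand-in for the CNN classifier
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    accs.append(accuracy_score(y[test_idx], pred))
    f1s.append(f1_score(y[test_idx], pred, zero_division=0))

print(f"mean accuracy over 10 folds: {np.mean(accs):.2f}")
print(f"mean F1 over 10 folds:       {np.mean(f1s):.2f}")
```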

5.
Handbook of research on updating and innovating health professions education: Post-pandemic perspectives ; : 298-323, 2022.
Article in English | APA PsycInfo | ID: covidwho-1903604

ABSTRACT

Health professional education is designed to help learners gain the knowledge, skills, and attitudes needed for practice. There has been extensive reform in health professional curricula to emphasize the teaching, development, and assessment of clinical skills. As medical education continues to evolve due to changes in healthcare, and with the ever-increasing growth of technology, it is important to ensure that health professional students are ready to practice successfully. Many curricula have incorporated clinical skills laboratories to provide learners with a safe and protected environment in which to practice the skills necessary for their profession. Students must therefore acquire, maintain, and enhance their clinical skills as they progress in their education and be properly assessed before they approach real patients. The emergence of the COVID-19 pandemic required a transition to remote education platforms, presenting both challenges and opportunities for health education. This chapter reviews how remote skills-based courses can teach and assess clinical skills effectively. (PsycInfo Database Record (c) 2022 APA, all rights reserved)

6.
Information Technologies and Learning Tools ; 87(1):185-198, 2022.
Article in Ukrainian | Web of Science | ID: covidwho-1856703

ABSTRACT

With most institutions switching to distance education due to the pandemic caused by COVID-19, the efficacy of assessment is a major concern. The research aimed to analyze how the Moodle learning management system (LMS) can be applied to online language testing, to study the effectiveness of online testing, and to compare students' and teachers' attitudes towards online testing in a General English university course. Online tests were administered as a synchronous component of distance learning to 857 first-year bachelor's degree students of "Kyiv-Mohyla Academy" (Ukraine) by 20 teachers during the 2020-2021 academic year. A mixed research design was employed, which involved collecting data through an online questionnaire completed anonymously by students and teachers in Microsoft Forms; Excel spreadsheets were used for the subsequent analysis. A quantitative descriptive study was conducted to evaluate students' and teachers' satisfaction with online testing. The expert evaluation method was used to judge the effectiveness of the online test against specified criteria and indicators, based on the judgments of 7 experienced teachers competent in test design. In addition, Pearson's correlation coefficient was calculated to compare the results of an online test and an oral exam. A qualitative research method was used to analyse and interpret the data from the experimental learning. Based on the results of our study, we can conclude that different types of Moodle LMS questions can be successfully applied to online language testing as part of course assessment at the university level. The paper argues that online language testing can be effective and relevant to course objectives from both students' and teachers' perspectives, with positive washback on education. The results of the study can be employed by university teachers in language course design for both distance and blended learning.
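
The comparison of online test and oral exam results via Pearson's correlation coefficient can be illustrated in a few lines of Python. The snippet below uses invented paired scores purely for demonstration; the values are not drawn from the study.

```python
from scipy.stats import pearsonr

# Hypothetical paired percentage scores for the same students.
online_test = [78, 85, 62, 90, 71, 88, 54, 95, 67, 80]
oral_exam   = [74, 88, 60, 92, 68, 85, 58, 97, 70, 77]

r, p_value = pearsonr(online_test, oral_exam)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```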

7.
Diagnostics (Basel) ; 11(11)2021 Oct 26.
Article in English | MEDLINE | ID: covidwho-1488509

ABSTRACT

Proper specimen collection is the most important step in ensuring accurate testing for coronavirus disease 2019 (COVID-19) and other infectious diseases. Assessment of healthcare workers' upper respiratory tract specimen collection skills is needed to ensure high-quality clinical specimens for COVID-19 testing. This study explored the validity evidence for a theoretical MCQ test and checklists developed for the assessment of nasopharyngeal (NPS) and oropharyngeal (OPS) specimen collection skills. We found good inter-item reliability (Cronbach's alpha = 0.76) for the items of the MCQ test and high inter-rater reliability using the checklists for the assessment of OPS and NPS skills, of 0.86 and 0.87, respectively. The MCQ scores were significantly different between experts (mean 98%) and novices (mean 66%), p < 0.001, and a pass/fail score of 91% was established. We found significant discrimination between the checklist scores of experts (mean 95% for OPS and 89% for NPS) and novices (mean 50% for OPS and 36% for NPS), p < 0.001, and pass/fail scores of 76% for OPS and 61% for NPS were established. Further, the results demonstrated that a group of workers without healthcare education can perform upper respiratory tract specimen collection comparably to experts after a short, focused simulation-based training session. This study therefore provides validity evidence for the use of a theoretical and practical test of upper respiratory specimen collection skills that can be used for competency-based training of workers in COVID-19 test centers.
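
The abstract reports pass/fail scores derived from the contrast between expert and novice performance but does not name the standard-setting procedure. The contrasting-groups method is one common choice in simulation-based assessment; the sketch below illustrates that method only, on invented score samples, by locating the point where normal distributions fitted to the two groups intersect.

```python
import numpy as np
from scipy.stats import norm

def contrasting_groups_cutoff(novice_scores, expert_scores):
    """Cut score where fitted normal densities of the two groups intersect."""
    n_mu, n_sd = np.mean(novice_scores), np.std(novice_scores, ddof=1)
    e_mu, e_sd = np.mean(expert_scores), np.std(expert_scores, ddof=1)
    grid = np.linspace(n_mu, e_mu, 10_000)   # search between the group means
    gap = np.abs(norm.pdf(grid, n_mu, n_sd) - norm.pdf(grid, e_mu, e_sd))
    return grid[np.argmin(gap)]

# Hypothetical checklist scores (percent) for OPS collection.
novices = [48, 52, 45, 55, 50, 47, 58, 44]
experts = [93, 96, 91, 97, 95, 94, 98, 92]
print(f"suggested pass/fail score: {contrasting_groups_cutoff(novices, experts):.0f}%")
```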

8.
J Gen Intern Med ; 35(9): 2675-2679, 2020 09.
Article in English | MEDLINE | ID: covidwho-635087

ABSTRACT

INTRODUCTION: Hospital and ambulatory care systems are rapidly building their virtual care capacity in response to the novel coronavirus (COVID-19) pandemic. The use of resident trainees in telemedicine is one area of potential development and expansion. To date, however, training opportunities in this field have been limited, and residents may not be adequately prepared to provide high-quality telemedicine care. AIM: This study evaluates the impact of an adapted telemedicine Objective Structured Clinical Examination (OSCE) on residents' telemedicine-specific training competencies. SETTING: Primary Care Internal Medicine residents at a large urban academic hospital. PROGRAM DESCRIPTION: In March 2020, the New York University Grossman School of Medicine Primary Care program adapted its annual comprehensive OSCE to a telemedicine-based platform to comply with distance learning and social distancing policies during the COVID-19 pandemic. A previously deployed in-person OSCE on the subject of a medical error was adapted to a telemedicine environment and delivered to 23 primary care residents. Both case-specific and core learning competencies were assessed, and additional observations were made of the impact of the telemedicine context on the encounter. PROGRAM EVALUATION: Three areas of telemedicine competency need were identified in the OSCE case: technical proficiency; virtual information gathering, including history, collateral information collection, and physical exam; and interpersonal communication skills, both verbal and nonverbal. Residents expressed enthusiasm for telemedicine training but had concerns about their preparedness for telemedicine practice and noted the need for further competency and curricular development. DISCUSSION: Programs interested in building residents' capacity to perform telemedicine, particularly during the COVID-19 pandemic, can make a significant impact on their trainees' comfort and preparedness by addressing key issues in technical proficiency, history and exam skills, and communication. Further research and curricular development in digital professionalism and digital empathy for trainees may also be beneficial.


Subject(s)
Betacoronavirus , Capacity Building/methods , Clinical Competence , Coronavirus Infections/therapy , Internship and Residency/methods , Pneumonia, Viral/therapy , Telemedicine/methods , COVID-19 , Capacity Building/trends , Coronavirus Infections/diagnosis , Coronavirus Infections/epidemiology , Disease Outbreaks/prevention & control , Humans , Internship and Residency/trends , Pandemics , Pneumonia, Viral/diagnosis , Pneumonia, Viral/epidemiology , Primary Health Care/methods , Primary Health Care/trends , Program Evaluation/methods , SARS-CoV-2 , Telemedicine/trends